An Approximate Quasi-Newton Bundle-Type Method for Nonsmooth Optimization
Authors
Abstract
The proposed method also possesses a Q-superlinear rate of convergence under some additional assumptions, and it should be noted that we use only approximate values of the objective function and its subgradients, which makes the algorithm easier to implement. Some notation is listed below for presenting the algorithm.

(i) ∂f(x) = {ξ ∈ Rⁿ | f(z) ≥ f(x) + ξᵀ(z − x), ∀z ∈ Rⁿ}, the subdifferential of f at x; each such ξ is called a subgradient of f at x.

(ii) ∂_ε f(x) = {η ∈ Rⁿ | f(z) ≥ f(x) + ηᵀ(z − x) − ε, ∀z ∈ Rⁿ}, the ε-subdifferential of f at x; each such η is called an ε-subgradient of f at x.

(iii) p(x) = argmin_{z ∈ Rⁿ} {f(z) + (2λ)⁻¹‖z − x‖²}, the unique minimizer of (2).

(iv) G(x) = λ⁻¹(x − p(x)), the gradient of F at x.

This paper is organized as follows. In Section 2, to approximate the unique minimizer p(x) of (2), we introduce the bundle idea, which uses approximate values of the objective function and its subgradients. The approximate quasi-Newton bundle-type (AQNBT) algorithm is presented in Section 3. In the last section, we prove the global convergence and, under additional assumptions, the Q-superlinear convergence of the proposed algorithm.

2. The Approximation of p(x)

Let x = x_k and s = z − x, where x_k is the current iterate of the AQNBT algorithm presented in Section 3; then (2) takes the form

F(x_k) = min_{s ∈ Rⁿ} { f(x_k + s) + (2λ)⁻¹‖s‖² }.  (12)

Now we consider approximating f(x_k + s) by using the bundle idea. Suppose we have a bundle J_k generated sequentially starting from x_k, possibly together with a subset of the previous bundle used to generate x_k. The bundle contains the data (z_i, f_i, gᵃ(z_i, ε_i)), i ∈ J_k, where z_i ∈ Rⁿ, f_i ∈ R, and gᵃ(z_i, ε_i) ∈ Rⁿ satisfy

f(z_i) ≥ f_i ≥ f(z_i) − ε_i,
f(z) ≥ f_i + ⟨gᵃ(z_i, ε_i), z − z_i⟩, ∀z ∈ Rⁿ.  (13)

Suppose that the elements of J_k are arranged in the order in which they entered the bundle; without loss of generality we may take J_k = {1, …, j}. The tolerance ε_i is updated by the rule ε_{i+1} = γ ε_i, 0 < γ < 1, i ∈ J_k. Condition (13) means that gᵃ(z_i, ε_i) ∈ ∂_{ε_i} f(z_i), i ∈ J_k. Using the data in the bundle, we construct a polyhedral function fᵃ(x_k + s) defined by the pointwise maximum of the affine minorants in (13) evaluated at z = x_k + s.
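To make the objects above concrete, here is a minimal numerical sketch, not the paper's AQNBT algorithm. It assumes the simple choice f(x) = ‖x‖₁, so that the proximal point p(x) in (iii) has a closed soft-thresholding form; the names `lam`, `subgrad`, and `f_model` are illustrative, and the bundle uses exact values (ε_i = 0). The snippet evaluates p(x), F(x), and G(x) = λ⁻¹(x − p(x)), and checks that a cutting-plane model built from bundle data satisfying (13) minorizes f.

```python
# Illustrative sketch of the Moreau-Yosida objects p(x), F(x), G(x) and the
# polyhedral bundle model, assuming f(x) = ||x||_1 (not the paper's method).
import numpy as np

lam = 0.5  # the regularization parameter lambda in (2); illustrative value

def f(x):
    """Nonsmooth convex objective f(x) = ||x||_1."""
    return np.abs(x).sum()

def subgrad(x):
    """A subgradient of ||.||_1: sign(x), choosing 0 at the kinks."""
    return np.sign(x)

def p(x):
    """p(x) = argmin_z { f(z) + (2*lam)^{-1}||z - x||^2 }: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def F(x):
    """Moreau-Yosida regularization F(x) = f(p(x)) + (2*lam)^{-1}||p(x) - x||^2."""
    px = p(x)
    return f(px) + np.dot(px - x, px - x) / (2.0 * lam)

def G(x):
    """Gradient of F at x: G(x) = (x - p(x)) / lam, as in notation (iv)."""
    return (x - p(x)) / lam

def f_model(s, xk, bundle):
    """Polyhedral model: pointwise max of the affine minorants from (13)."""
    return max(fi + gi @ (xk + s - zi) for zi, fi, gi in bundle)

xk = np.array([1.5, -0.2, 0.7])
# Bundle data (z_i, f_i, g_i) with eps_i = 0, so each plane minorizes f.
bundle = [(z, f(z), subgrad(z)) for z in (xk, p(xk))]
assert f_model(np.zeros_like(xk), xk, bundle) <= f(xk)  # minorant check

# The step x_{k+1} = x_k - lam * G(x_k) equals the proximal point p(x_k),
# so a few such steps drive x_k toward the minimizer of f.
for _ in range(5):
    xk = xk - lam * G(xk)
print(xk, F(xk))
```

Note that the plain update x_{k+1} = x_k − λG(x_k) reduces to the proximal point iteration x_{k+1} = p(x_k); the paper's algorithm instead applies a quasi-Newton update to F using approximate function and subgradient values supplied by the inner bundle loop.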
Similar Resources
Quasi-Newton Bundle-Type Methods for Nondifferentiable Convex Optimization
In this paper we provide implementable methods for solving nondifferentiable convex optimization problems. A typical method minimizes an approximate Moreau–Yosida regularization using a quasi-Newton technique with inexact function and gradient values which are generated by a finite inner bundle algorithm. For a BFGS bundle-type method, global and superlinear convergence results for the outer ite...
Limited memory interior point bundle method for large inequality constrained nonsmooth minimization
Many practical optimization problems involve nonsmooth (that is, not necessarily differentiable) functions of hundreds or thousands of variables with various constraints. In this paper, we describe a new efficient adaptive limited memory interior point bundle method for large, possibly nonconvex, nonsmooth inequality constrained optimization. The method is a hybrid of the nonsmooth variable met...
A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems
In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning
We extend the well-known BFGS quasi-Newton method and its limited-memory variant (LBFGS) to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We prove that under some technical conditions, the re...
New Quasi-Newton Optimization Methods for Machine Learning
This thesis develops new quasi-Newton optimization methods that exploit the well-structured functional form of objective functions often encountered in machine learning, while still maintaining the solid foundation of the standard BFGS quasi-Newton method. In particular, our algorithms are tailored for two categories of machine learning problems: (1) regularized risk minimization problems with c...
A Quasi-Newton Approach to Nonsmooth Convex Optimization
We extend the well-known BFGS quasi-Newton method and its limited-memory variant (LBFGS) to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: The local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting subLBFGS algorithm to L2-reg...
Publication date: 2014